1.
J Am Med Inform Assoc ; 31(4): 910-918, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38308819

ABSTRACT

OBJECTIVES: Despite federally mandated collection of sex and gender demographics in the electronic health record (EHR), longitudinal assessments are lacking. We assessed sex and gender demographic field utilization using EHR metadata. MATERIALS AND METHODS: Patients ≥18 years of age in the Mass General Brigham health system with a first Legal Sex entry (a registration requirement) between January 8, 2018 and January 1, 2022 were included in this retrospective study. Metadata for all sex and gender fields (Legal Sex, Sex Assigned at Birth [SAAB], Gender Identity) were quantified by completion rates, user types, and longitudinal change. A nested qualitative study of providers from specialties with high and low field use identified themes related to utilization. RESULTS: 1 576 120 patients met inclusion criteria: 100% had a Legal Sex, 20% a Gender Identity, and 19% a SAAB; 321 185 patients had field changes other than the initial Legal Sex entry. About 2% of patients had a subsequent Legal Sex change, and 25% of those had ≥2 changes; 20% of patients had ≥1 update to Gender Identity and 19% to SAAB. Excluding the first Legal Sex entry, administrators made most changes (67%) across all fields, followed by patients (25%), providers (7.2%), and automated Health Level-7 (HL7) interface messages (0.7%). Provider utilization varied by subspecialty; themes related to systems barriers and personal perceptions were identified. DISCUSSION: Sex and gender demographic fields are primarily used by administrators, raising concern about data accuracy; provider use is heterogeneous and lacking. Provider awareness of field availability and variable workflows may impede use. CONCLUSION: EHR metadata highlights areas for improvement in sex and gender field utilization.
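A minimal sketch of how completion rates and change attribution could be derived from such EHR metadata, assuming a hypothetical extract with patient_id, field, user_type, and timestamp columns (none of these names come from the study):

```python
# Sketch of the field-utilization summary described above; "changes" holds one
# row per sex/gender field edit from a hypothetical EHR metadata extract.
import pandas as pd

changes = pd.read_csv("sex_gender_field_changes.csv")  # hypothetical extract
# assumed columns: patient_id, field ("Legal Sex"|"Gender Identity"|"SAAB"),
#                  user_type ("administrator"|"patient"|"provider"|"HL7"), timestamp

n_patients = changes["patient_id"].nunique()

# Completion rate per field: share of patients with at least one entry.
completion = changes.groupby("field")["patient_id"].nunique() / n_patients

# Who makes changes, excluding each patient's initial Legal Sex entry (a
# registration requirement rather than a discretionary update).
changes = changes.sort_values("timestamp")
is_first_legal_sex = (
    (changes["field"] == "Legal Sex")
    & ~changes.duplicated(subset=["patient_id", "field"])
)
user_share = changes.loc[~is_first_legal_sex, "user_type"].value_counts(normalize=True)

print(completion, user_share, sep="\n")
```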


Subjects
Gender Identity, Transgender Persons, Infant, Newborn, Humans, Male, Female, Electronic Health Records, Metadata, Retrospective Studies, Demography
2.
Ann Emerg Med ; 81(6): 738-748, 2023 06.
Article in English | MEDLINE | ID: mdl-36682997

ABSTRACT

STUDY OBJECTIVE: Early notification of admissions from the emergency department (ED) may allow hospitals to plan for inpatient bed demand. This study aimed to assess Epic's ED Likelihood to Occupy an Inpatient Bed predictive model and its application in improving hospital bed planning workflows. METHODS: All ED adult (18 years and older) visits from September 2021 to August 2022 at a large regional health care system were included. The primary outcome was inpatient admission. The predictive model is a random forest algorithm that uses demographic and clinical features. The model was implemented prospectively, with scores generated every 15 minutes. The areas under the receiver operating characteristic curve (AUROC) and precision-recall curve (AUPRC) were calculated using the maximum score prior to the outcome and for each prediction independently. Test characteristics and lead time were calculated over a range of model score thresholds. RESULTS: Over 11 months, 329,194 encounters were evaluated, with an incidence of inpatient admission of 25.4%. The encounter-level AUROC was 0.849 (95% confidence interval [CI], 0.848 to 0.851), and the AUPRC was 0.643 (95% CI, 0.640 to 0.647). With a prediction horizon of 6 hours, the AUROC was 0.758 (95% CI, 0.758 to 0.759) and the AUPRC was 0.470 (95% CI, 0.469 to 0.471). At a predictive model threshold of 40, the sensitivity was 0.49, the positive predictive value was 0.65, and the median lead-time warning was 127 minutes before the inpatient bed request. CONCLUSION: The Epic ED Likelihood to Occupy an Inpatient Bed model may improve hospital bed planning workflows. Further study is needed to determine its operational effect.
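The two evaluation views described (maximum score per encounter, and each 15-minute prediction scored independently against a 6-hour horizon) could be computed roughly as below; the DataFrame and column names are assumptions, not Epic's schema:

```python
# Minimal sketch of the two evaluation views, assuming a DataFrame "preds"
# with one row per 15-minute prediction: encounter_id, score,
# minutes_to_outcome, admitted (0/1). Names are illustrative.
import pandas as pd
from sklearn.metrics import roc_auc_score, average_precision_score

preds = pd.read_csv("ed_bed_predictions.csv")  # hypothetical extract

# Encounter-level: score each visit by its maximum prediction before the outcome.
enc = preds.groupby("encounter_id").agg(score=("score", "max"),
                                        admitted=("admitted", "first"))
print("encounter AUROC:", roc_auc_score(enc["admitted"], enc["score"]))
print("encounter AUPRC:", average_precision_score(enc["admitted"], enc["score"]))

# Prediction-level with a 6-hour horizon: does admission occur within 6 hours?
label_6h = preds["admitted"].eq(1) & preds["minutes_to_outcome"].le(360)
print("6-hour AUROC:", roc_auc_score(label_6h, preds["score"]))
print("6-hour AUPRC:", average_precision_score(label_6h, preds["score"]))
```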


Subjects
Inpatients, Patient Admission, Adult, Humans, Prospective Studies, Hospitalization, Emergency Service, Hospital, Retrospective Studies
3.
Ann Emerg Med ; 81(4): 485-491, 2023 04.
Article in English | MEDLINE | ID: mdl-36669909

ABSTRACT

STUDY OBJECTIVE: Delays in the second dose of antibiotics in the emergency department (ED) are associated with increased morbidity and mortality in patients with serious infections. We analyzed the influence of clinical decision support to prevent delays in second doses of broad-spectrum antibiotics in the ED. METHODS: We allocated adult patients who received cefepime or piperacillin/tazobactam in 9 EDs within an integrated health care system to either an electronic alert that reminded ED clinicians to reorder antibiotics at the appropriate interval or usual care. The primary outcome was the median delay in antibiotic administration. Secondary outcomes were rates of intensive care unit (ICU) admission, hospital mortality, and hospital length of stay. We included a post hoc secondary outcome of frequency of major delay (>25% of the expected interval for the second antibiotic dose). RESULTS: A total of 1,113 ED patients treated with cefepime or piperacillin/tazobactam were enrolled in the study, of whom 420 remained under ED care when their second dose was due and were included in the final analysis. The clinical decision support tool was associated with reduced antibiotic delays (median difference 35 minutes; 95% confidence interval [CI], 5 to 65). There were no differences in ICU transfers, inpatient mortality, or hospital length of stay. The clinical decision support tool was associated with a decreased probability of major delay (absolute risk reduction 13%; 95% CI, 6 to 20). CONCLUSIONS: The implementation of a clinical decision support alert reminding clinicians to reorder second doses of antibiotics was associated with a reduction in the length and frequency of antibiotic delays in the ED. There was no effect on the rates of ICU transfers, inpatient mortality, or hospital length of stay.
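The "major delay" definition (>25% of the expected dosing interval) reduces to simple timestamp arithmetic; a sketch with illustrative names:

```python
# Sketch of the delay definitions above: a second dose is "majorly delayed"
# when it arrives more than 25% of the expected dosing interval after it was due.
from datetime import datetime, timedelta

def second_dose_delay(first_dose: datetime, second_dose: datetime,
                      interval_hours: float) -> tuple[timedelta, bool]:
    """Return (delay past the due time, whether the delay is 'major')."""
    due = first_dose + timedelta(hours=interval_hours)
    delay = max(second_dose - due, timedelta(0))
    major = delay > timedelta(hours=0.25 * interval_hours)
    return delay, major

# Example: cefepime q8h, second dose given 2h 15m late -> major delay (>2h).
delay, major = second_dose_delay(datetime(2023, 1, 1, 8, 0),
                                 datetime(2023, 1, 1, 18, 15), 8)
print(delay, major)  # 2:15:00 True
```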


Subjects
Anti-Bacterial Agents, Hospitalization, Adult, Humans, Anti-Bacterial Agents/therapeutic use, Cefepime, Piperacillin, Tazobactam Drug Combination, Emergency Service, Hospital, Length of Stay, Retrospective Studies
4.
Appl Clin Inform ; 13(5): 1024-1032, 2022 10.
Article in English | MEDLINE | ID: mdl-36288748

ABSTRACT

OBJECTIVES: To improve clinical decision support (CDS) by allowing users to provide real-time feedback when they interact with CDS tools and by creating processes for responding to and acting on this feedback. METHODS: Two organizations implemented similar real-time feedback tools and processes in their electronic health records and gathered data over a 30-month period. At both sites, users could provide feedback through Likert-scale feedback links embedded in all end-user-facing alerts, with results stored outside the electronic health record, and could leave a comment when they overrode an alert. Both systems were monitored daily by clinical informatics teams. RESULTS: The two sites received 2,639 Likert feedback comments and 623,270 override comments over the 30-month period. Through four case studies, we describe our use of end-user feedback to rapidly respond to build errors and to identify inaccurate knowledge management, user-interface issues, and unique workflows. CONCLUSION: Feedback on CDS tools can be solicited in multiple ways, and it contains valuable and actionable suggestions for improving CDS alerts. Additionally, end users appreciate knowing their feedback is being received and may also make other suggestions to improve the electronic health record. Incorporating end-user feedback into CDS monitoring, evaluation, and remediation is a way to improve CDS.
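A daily review of such feedback could look like the sketch below, grouping the previous day's Likert ratings and override comments by alert; the file and column names are assumptions, not either organization's schema:

```python
# Sketch of a daily triage digest for CDS feedback channels: group yesterday's
# Likert ratings and override comments by alert for an informatics team.
import pandas as pd

feedback = pd.read_csv("cds_feedback.csv", parse_dates=["submitted_at"])
# assumed columns: alert_id, kind ("likert"|"override_comment"), rating, comment

yesterday = pd.Timestamp.now().normalize() - pd.Timedelta(days=1)
recent = feedback[feedback["submitted_at"] >= yesterday]

digest = recent.groupby(["alert_id", "kind"]).agg(
    n=("comment", "size"),
    mean_rating=("rating", "mean"),  # NaN for override comments without ratings
)
print(digest.sort_values("n", ascending=False).head(20))
```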


Subjects
Decision Support Systems, Clinical, Feedback, Electronic Health Records, Workflow
5.
Appl Clin Inform ; 13(4): 910-915, 2022 08.
Article in English | MEDLINE | ID: mdl-36170882

ABSTRACT

BACKGROUND: Computerized clinical decision support (CDS) used in electronic health record systems (EHRs) has led to positive outcomes as well as unintended consequences, such as alert fatigue. Characteristics of the EHR session can be used to restrict CDS tools and increase their relevance, but the implications of this approach are not rigorously studied. OBJECTIVES: To assess the utility of using the "login location" of EHR users-that is, the location they chose on the login screen-as a variable in CDS logic. METHODS: We measured concordance between users' login location and the location of the patients they placed orders for and conducted stratified analyses by user group. We also estimated how often login location data may be stale or inaccurate. RESULTS: One in five CDS alerts incorporated the EHR user's login location into its logic. Analysis of nearly 2 million orders placed by nearly 8,000 users showed that concordance between login location and patient location was high for nurses, nurse practitioners, and physician assistants (all >95%) but lower for fellows (77%) and residents (55%). When providers switched between patients in the EHR, they usually did not update their login location accordingly. CONCLUSION: CDS alerts commonly incorporate the user's login location into their logic. Login location often matches the location of the patient the user is caring for and may provide additional information useful to CDS logic, but substantial discordance was observed for certain user groups and when users appeared not to change their login location across sessions. Those who design CDS alerts should consider a data-driven approach to evaluate the appropriateness of login location for each use case.
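The concordance measurement is essentially a per-role match rate between two location fields; a sketch under assumed column names:

```python
# Sketch of the concordance measurement: for each order, does the user's login
# location match the patient's location? Column names are assumptions.
import pandas as pd

orders = pd.read_csv("orders.csv")  # hypothetical: user_id, user_role,
                                    # login_location, patient_location

orders["concordant"] = orders["login_location"] == orders["patient_location"]
by_role = orders.groupby("user_role")["concordant"].mean().sort_values()
print(by_role)  # e.g., residents near 0.55, nurses above 0.95 in the study
```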


Subjects
Decision Support Systems, Clinical, Physicians, Electronic Health Records, Humans
6.
J Am Med Inform Assoc ; 29(11): 1972-1975, 2022 10 07.
Article in English | MEDLINE | ID: mdl-36040207

ABSTRACT

OBJECTIVE: To identify common medication route-related causes of clinical decision support (CDS) malfunctions and best practices for avoiding them. MATERIALS AND METHODS: Case series of medication route-related CDS malfunctions from diverse healthcare provider organizations. RESULTS: Nine cases were identified and described, including both false-positive and false-negative alert scenarios. A common cause was the inclusion of nonsystemically available medication routes in value sets (eg, eye drops, ear drops, or topical preparations) when only systemically available routes were appropriate. DISCUSSION: These value set errors are common, occur across healthcare provider organizations and electronic health record (EHR) systems, affect many different types of medications, and can impact the accuracy of CDS interventions. New knowledge management tools and processes for auditing existing value sets and supporting the creation of new value sets can mitigate many of these issues. Furthermore, value set issues can adversely affect other aspects of the EHR, such as quality reporting and population health management. CONCLUSION: Value set issues related to medication routes are widespread and can lead to CDS malfunctions. Organizations should make appropriate investments in knowledge management tools and strategies, such as those outlined in our recommendations.
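An audit in the spirit of these recommendations could mechanically flag nonsystemic routes inside value sets intended to capture systemic exposure; the route list and entries below are illustrative:

```python
# Sketch of a value-set audit: flag entries whose route is not systemically
# available (eye/ear drops, topicals) inside a value set meant to capture
# systemic medication exposure. Routes and records are illustrative.
NONSYSTEMIC_ROUTES = {"ophthalmic", "otic", "topical"}

def audit_value_set(entries: list[dict]) -> list[dict]:
    """Return entries that likely do not belong in a systemic-medication set."""
    return [e for e in entries if e["route"].lower() in NONSYSTEMIC_ROUTES]

value_set = [
    {"medication": "gentamicin", "route": "intravenous"},
    {"medication": "gentamicin", "route": "ophthalmic"},  # should be flagged
]
print(audit_value_set(value_set))
```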


Subjects
Decision Support Systems, Clinical, Medical Order Entry Systems, Electronic Health Records, Ophthalmic Solutions, Research, Software
7.
J Am Med Inform Assoc ; 29(12): 2124-2127, 2022 11 14.
Article in English | MEDLINE | ID: mdl-36036367

ABSTRACT

Monkeypox virus was historically rare outside of West and Central Africa until the current 2022 global outbreak, which has required clinicians to be alert to identify individuals with possible monkeypox, institute isolation, and take appropriate next steps in evaluation and management. Clinical decision support systems (CDSS), which have been shown to improve adherence to clinical guidelines, can support frontline clinicians in applying the most current evaluation and management guidance in the setting of an emerging infectious disease outbreak when those guidelines are evolving over time. Here, we describe the rapid development and implementation of a CDSS tool embedded in the electronic health record to guide frontline clinicians in the diagnostic evaluation of monkeypox infection and triage patients with potential monkeypox infection to individualized infectious disease physician review. We also present data on the initial performance of this tool in a large integrated healthcare system.


Subjects
Decision Support Systems, Clinical, Mpox, Physicians, Humans, Mpox/epidemiology, Disease Outbreaks, Electronic Health Records
8.
J Am Med Inform Assoc ; 29(10): 1705-1714, 2022 09 12.
Article in English | MEDLINE | ID: mdl-35877074

ABSTRACT

OBJECTIVE: Surviving Sepsis guidelines recommend blood cultures before administration of intravenous (IV) antibiotics for patients with sepsis or moderate to high risk of bacteremia. Clinical decision support (CDS) that reminds emergency department (ED) providers to obtain blood cultures when ordering IV antibiotics may lead to improvements in this process measure. METHODS: This was a multicenter causal impact analysis comparing timely blood culture collections prior to IV antibiotics for adult ED patients 1 year before and after a CDS intervention was implemented in the electronic health record. A Bayesian structural time-series model compared daily timely blood culture collections to a forecasted synthetic control. Mixed effects models evaluated the impact of the intervention while controlling for confounders. RESULTS: The analysis included 54 538 patients over 2 years. In the baseline phase, 46.1% had blood cultures prior to IV antibiotics, compared to 58.8% after the intervention. Causal impact analysis determined an absolute increase of 13.1% (95% CI 10.4%-15.7%) in timely blood culture collections overall, although the difference in patients with a sepsis diagnosis or who met CDC Adult Sepsis Event criteria was not significant (absolute difference 8.0%; 95% CI -0.2% to 15.8%). Blood culture positivity increased in the intervention phase, and contamination rates were similar in both study phases. DISCUSSION: CDS improved blood culture collection before IV antibiotics in the ED without increasing overutilization. CONCLUSION: A simple CDS alert increased timely blood culture collections in ED patients for whom concern for infection was high enough to warrant IV antibiotics.
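The synthetic-control comparison could be reproduced with a Python port of the CausalImpact method (for example, pycausalimpact); the data file, column name, and intervention dates below are placeholders, and the package choice is an assumption rather than the study's actual tooling:

```python
# Sketch of the causal-impact comparison: fit a Bayesian structural
# time-series model on the pre-period and compare the post-period against
# the forecasted synthetic control. pip install pycausalimpact (assumed).
import pandas as pd
from causalimpact import CausalImpact

daily = pd.read_csv("daily_timely_cultures.csv", index_col="date",
                    parse_dates=True)  # hypothetical: timely_rate column
pre_period = [pd.Timestamp("2020-07-01"), pd.Timestamp("2021-06-30")]
post_period = [pd.Timestamp("2021-07-01"), pd.Timestamp("2022-06-30")]

ci = CausalImpact(daily[["timely_rate"]], pre_period, post_period)
print(ci.summary())  # absolute/relative effect with credible intervals
```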


Subjects
Decision Support Systems, Clinical, Sepsis, Adult, Anti-Bacterial Agents/therapeutic use, Bayes Theorem, Blood Culture, Emergency Service, Hospital, Humans, Retrospective Studies, Sepsis/diagnosis, Sepsis/drug therapy
9.
Ann Emerg Med ; 78(3): 370-380, 2021 09.
Article in English | MEDLINE | ID: mdl-33975733

ABSTRACT

STUDY OBJECTIVE: Tetanus is the most common vaccination given in the emergency department, yet administration of tetanus vaccine boosters in the ED may not comply with the US Centers for Disease Control and Prevention's recommended vaccination schedule. We implemented a clinical decision support alert in the electronic health record that warned providers ordering a tetanus vaccine if a prior one had been given within 10 years, and we studied its efficacy in reducing potentially unnecessary vaccines in the ED. METHODS: This was a retrospective, quasi-experimental, 1-group, pretest-posttest study in 3 hospital EDs in Boston, MA. We studied adult patients for whom tetanus vaccines were ordered despite a history of vaccination within the prior 10 years. We compared the number of potentially unnecessary tetanus vaccine administrations in a baseline phase (when the clinical decision support alert was not visible) versus an intervention phase. RESULTS: Of eligible patients, 22.1% (95% confidence interval [CI] 21.8% to 22.4%) had prior tetanus vaccines within 5 years, 12.8% (95% CI 12.5% to 13.0%) within 5 to 10 years, and 3.8% (95% CI 3.6% to 3.9%) more than 10 years ago; 61.3% (95% CI 60.9% to 61.7%) had no prior tetanus vaccination documentation. Of 60,983 encounters, 337 met the inclusion criteria. A tetanus vaccination was administered in 91% (95% CI 87% to 96%) of encounters in the baseline phase, compared to 55% (95% CI 47% to 62%) during the intervention. The absolute risk reduction was 36.7% (95% CI 28.0% to 45.4%), and the number of encounters needed to alert to avoid 1 potentially unnecessary tetanus vaccine (number needed to treat) was 2.7 (95% CI 2.2 to 3.6). For patients with tetanus vaccines within the prior 5 years, the absolute risk reduction was 47.9% (95% CI 35.5% to 60.3%) and the number needed to treat was 2.1 (95% CI 1.7 to 2.8). CONCLUSION: A clinical decision support alert that warns ED clinicians that a patient may have an up-to-date tetanus vaccination status reduces potentially unnecessary vaccinations.
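The headline effect sizes follow from simple arithmetic on the two phase rates; a worked version:

```python
# Worked arithmetic behind the headline numbers: absolute risk reduction (ARR)
# and the number of alerted encounters needed to avoid one unnecessary
# vaccine (the "number needed to alert", reported as NNT).
baseline_rate = 0.91      # vaccinated despite up-to-date status, alert hidden
intervention_rate = 0.55  # same measure with the alert visible

arr = baseline_rate - intervention_rate
nnt = 1 / arr
print(f"ARR = {arr:.1%}, NNT = {nnt:.1f}")
# ~36% and ~2.8 from the rounded rates; the paper reports 36.7% and 2.7
# from the unrounded counts.
```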


Subjects
Decision Support Systems, Clinical/standards, Immunization Schedule, Tetanus Toxoid/administration & dosage, Vaccination/statistics & numerical data, Adolescent, Adult, Aged, Aged, 80 and over, Emergency Service, Hospital/statistics & numerical data, Female, Humans, Male, Middle Aged, Non-Randomized Controlled Trials as Topic, Quality Improvement, Retrospective Studies, Tetanus Toxoid/adverse effects, Tetanus Toxoid/immunology, Unnecessary Procedures, Young Adult
10.
Clin Infect Dis ; 73(12): 2248-2256, 2021 12 16.
Article in English | MEDLINE | ID: mdl-33564833

ABSTRACT

BACKGROUND: Isolation of hospitalized persons under investigation (PUIs) for coronavirus disease 2019 (COVID-19) reduces nosocomial transmission risk. Efficient evaluation of PUIs is needed to preserve scarce healthcare resources. We describe the development, implementation, and outcomes of an inpatient diagnostic algorithm and clinical decision support system (CDSS) to evaluate PUIs. METHODS: We conducted a pre-post study of CORAL (COvid Risk cALculator), a CDSS that guides frontline clinicians through a risk-stratified COVID-19 diagnostic workup, removes transmission-based precautions when the workup is complete and negative, and triages complex cases to infectious diseases (ID) physician review. Before CORAL, ID physicians reviewed all PUI records to guide workup and precautions. After CORAL, frontline clinicians evaluated PUIs directly using CORAL. We compared pre- and post-CORAL frequency of repeated severe acute respiratory syndrome coronavirus 2 nucleic acid amplification tests (NAATs), time from NAAT result to PUI status discontinuation, total duration of PUI status, and ID physician work hours, using linear and logistic regression adjusted for COVID-19 incidence. RESULTS: Fewer PUIs underwent repeated testing after an initial negative NAAT after CORAL than before CORAL (54% vs 67%, respectively; adjusted odds ratio, 0.53 [95% confidence interval, 0.44-0.63]; P < .01). CORAL significantly reduced the average time to PUI status discontinuation (adjusted difference [standard error], -7.4 [0.8] hours per patient), total duration of PUI status (-19.5 [1.9] hours per patient), and average ID physician work hours (-57.4 [2.0] hours per day) (all P < .01). No patients had a positive NAAT result within 7 days after discontinuation of precautions via CORAL. CONCLUSIONS: CORAL is an efficient and effective CDSS for guiding frontline clinicians through the diagnostic evaluation of PUIs and the safe discontinuation of precautions.
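The adjusted pre-post comparison could be sketched as a logistic regression of repeat testing on study phase with COVID-19 incidence as a covariate; column names are assumptions:

```python
# Sketch of the adjusted comparison: logistic regression of repeat NAAT
# testing on study phase, controlling for local COVID-19 incidence.
import pandas as pd
import statsmodels.formula.api as smf

puis = pd.read_csv("pui_episodes.csv")  # hypothetical: repeat_naat (0/1),
                                        # post_coral (0/1), covid_incidence

model = smf.logit("repeat_naat ~ post_coral + covid_incidence", data=puis).fit()
print(model.params["post_coral"])          # log-odds for the CORAL phase
print(model.conf_int().loc["post_coral"])  # exp() of these gives the adjusted OR
```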


Subjects
Anthozoa, COVID-19, Animals, Humans, Nucleic Acid Amplification Techniques, Odds Ratio, SARS-CoV-2
12.
Infect Control Hosp Epidemiol ; 41(12): 1449-1451, 2020 12.
Article in English | MEDLINE | ID: mdl-32847641
14.
J Gen Intern Med ; 34(11): 2530-2535, 2019 11.
Article in English | MEDLINE | ID: mdl-31512185

ABSTRACT

BACKGROUND: Providers should estimate a patient's chance of surviving an in-hospital cardiac arrest with good neurologic outcome when initially admitting the patient, in order to participate in shared decision making with patients about their code status. OBJECTIVE: To examine the utility of the Good Outcome Following Attempted Resuscitation (GO-FAR) score in predicting prognosis after in-hospital cardiac arrest in a US trauma center. DESIGN: Retrospective observational study. SETTING: Level 1 trauma and academic hospital in Minneapolis, MN, USA. PARTICIPANTS: All cases of pulseless in-hospital cardiac arrest occurring in adults (18 years or older) admitted to the hospital between Jan 2009 and Sept 2018 were included. For patients with more than one arrest, only the first was included in this analysis. MAIN MEASURES: For each patient with verified in-hospital cardiac arrest, we calculated a GO-FAR score based on variables present in the electronic health record at the time of admission. Pre-determined outcomes included survival to discharge and survival to discharge with good neurologic outcome. KEY RESULTS: From 2009 to 2018, 403 adults suffered in-hospital cardiac arrest. A majority (65.5%) were male, and the mean age was 60.3 years. Overall survival to discharge was 33.0%; survival to discharge with good neurologic outcome was 17.4%. The GO-FAR score calculated at the time of admission correlated with survival to discharge with good neurologic outcome (AUC 0.68), which occurred in 5.3% of patients with below-average survival likelihood by GO-FAR score, 22.5% with average survival likelihood, and 34.1% with above-average survival likelihood. CONCLUSIONS: The GO-FAR score can estimate, at the time of admission to the hospital, the probability that a patient will survive to discharge with good neurologic outcome after an in-hospital cardiac arrest. This prognostic information can help providers frame discussions with patients on admission regarding whether to attempt cardiopulmonary resuscitation in the event of cardiac arrest.
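Checking a prognostic score's discrimination against an outcome, as done here (AUC 0.68), is a one-liner once the cohort is assembled; in this sketch the column names, the band labels, and the sign flip (higher GO-FAR scores imply worse expected outcome) are assumptions about the data layout, not the published specification:

```python
# Sketch of the discrimination check: AUC of the admission GO-FAR score
# against survival with good neurologic outcome, plus observed outcome
# rates within each GO-FAR survival-likelihood band.
import pandas as pd
from sklearn.metrics import roc_auc_score

arrests = pd.read_csv("ihca_cohort.csv")  # hypothetical: gofar_score,
                                          # good_outcome (0/1), band

# Negate the score so that "higher predicted value = better outcome".
print("AUC:", roc_auc_score(arrests["good_outcome"], -arrests["gofar_score"]))

# Observed good-outcome rate per band (below average / average / above average).
print(arrests.groupby("band")["good_outcome"].mean())
```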


Subjects
Cardiopulmonary Resuscitation/statistics & numerical data, Decision Support Techniques, Heart Arrest/mortality, Aged, Female, Heart Arrest/therapy, Humans, Male, Middle Aged, Registries, Retrospective Studies, United States/epidemiology
15.
J Am Med Inform Assoc ; 26(12): 1488-1492, 2019 12 01.
Article in English | MEDLINE | ID: mdl-31504592

ABSTRACT

OBJECTIVE: To investigate the effects of adjusting default order set settings on telemetry usage. MATERIALS AND METHODS: We performed a retrospective, controlled, before-after study of patients admitted to a house staff medicine service at an academic medical center, examining the effect of changing whether the admission telemetry order was pre-selected. Telemetry orders on admission and subsequent orders for telemetry were monitored pre- and post-change. Two other order sets whose default settings were unchanged served as controls. RESULTS: Between January 1, 2017 and May 1, 2018, 1,163 patients were admitted using the residency-customized version of the admission order set, which initially had telemetry pre-selected. In this group, there was a significant decrease in telemetry ordering after the intervention: telemetry was ordered for 79.1% of patients in the 8.5 months before the change versus 21.3% of patients in the 7.5 months after (χ2 = 382; P < .001). There was no significant change in telemetry usage among patients admitted using the two control order sets. DISCUSSION: Default settings have been shown to affect clinician ordering behavior in multiple domains. Consistent with prior findings, our study shows that changing order set defaults can significantly affect ordering practices. Our study was limited in that we were unable to determine whether the change in ordering behavior had a significant impact on patient care or safety. CONCLUSION: Decisions about default selections in electronic health record order sets can have significant consequences for ordering behavior.
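The pre-post comparison is a chi-squared test on a 2x2 table; the counts below are illustrative totals consistent with the reported percentages, not the study's exact table:

```python
# Sketch of the pre-post test: chi-squared on (phase x telemetry ordered).
from scipy.stats import chi2_contingency

# rows: pre-change, post-change; cols: telemetry ordered, not ordered
table = [[475, 125],   # ~79% of ~600 pre-change admissions (illustrative)
         [120, 443]]   # ~21% of ~563 post-change admissions (illustrative)
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.0f}, p={p:.2e}")
```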


Subjects
Medical Order Entry Systems, Practice Patterns, Physicians', Telemetry, Academic Medical Centers, Humans, Internship and Residency, Medical Staff, Hospital, Retrospective Studies
16.
J Am Med Inform Assoc ; 26(11): 1375-1378, 2019 11 01.
Article in English | MEDLINE | ID: mdl-31373352

ABSTRACT

Clinical decision support (CDS) systems are prevalent in electronic health records and provide many safety advantages. However, CDS systems can also cause unintended consequences. Monitoring programs focused on alert firing rates are important to detect anomalies and ensure systems are working as intended. Monitoring efforts do not generally include system load and the time to generate decision support, which is becoming increasingly important as more CDS systems rely on external, web-based content and algorithms. We report a case in which a web-based service caused a significant increase in the time to generate decision support, in turn leading to marked delays in electronic health record system responsiveness that could have led to patient safety events. Given this, it is critical to consider adding decision support generation time to ongoing CDS system monitoring programs.
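Adding generation time to a monitoring program can start as simply as timing the external call and flagging slow responses; the URL and threshold below are placeholders:

```python
# Sketch of latency monitoring for an external, web-based CDS service:
# time the call and warn when it exceeds a threshold.
import time
import urllib.request

SLOW_THRESHOLD_S = 2.0  # placeholder alerting threshold

def timed_cds_call(url: str) -> float:
    """Call the external CDS service and return elapsed seconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.monotonic() - start

elapsed = timed_cds_call("https://cds.example.org/advice")  # placeholder URL
if elapsed > SLOW_THRESHOLD_S:
    print(f"WARN: CDS response took {elapsed:.2f}s (> {SLOW_THRESHOLD_S}s)")
```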


Subjects
Cloud Computing, Decision Support Systems, Clinical, Electronic Health Records, Humans, Medical Order Entry Systems, Organizational Case Studies, Time Factors
17.
Stud Health Technol Inform ; 264: 1763-1764, 2019 Aug 21.
Article in English | MEDLINE | ID: mdl-31438332

ABSTRACT

Clinical decision support systems (CDSS) are widely used to improve patient care and guide workflow. End users can be valuable contributors to monitoring for CDSS malfunctions. However, they often have little means of providing direct feedback on the design and build of such systems. In this study, we describe an electronic survey tool, deployed from within the electronic health record and coupled with a conversation with clinical informaticians, as a method to manage CDSS design and lifecycle.


Subjects
Decision Support Systems, Clinical, Electronic Health Records, Surveys and Questionnaires, Workflow
19.
Circ Res ; 122(6): 864-876, 2018 03 16.
Article in English | MEDLINE | ID: mdl-29437835

ABSTRACT

RATIONALE: Current methods for assessing clinical risk due to exercise intolerance in patients with cardiopulmonary disease rely on a small subset of traditional variables. Alternative strategies incorporating the spectrum of factors underlying prognosis in at-risk patients may be useful clinically, but are lacking. OBJECTIVE: To use unbiased analyses to identify variables that correspond to clinical risk in patients with exercise intolerance. METHODS AND RESULTS: Data from 738 consecutive patients referred for invasive cardiopulmonary exercise testing at a single center (2011-2015) were analyzed retrospectively (derivation cohort). A correlation network of invasive cardiopulmonary exercise testing parameters was assembled using |r|>0.5. From an exercise network of 39 variables (ie, nodes) and 98 correlations (ie, edges), corresponding to P<9.5e-46 for each correlation, we focused on a subnetwork containing peak volume of oxygen consumption (pVo2) and 9 linked nodes. K-means clustering based on these 10 variables identified 4 novel patient clusters characterized by significant differences in 44 of 45 exercise measurements (P<0.01). Compared with a probabilistic model including 23 independent predictors of pVo2 and pVo2 itself, the network model was less redundant and identified clusters that were more distinct. Cluster assignment from the network model was predictive of subsequent clinical events. For example, 4.3-fold (P<0.0001; 95% CI, 2.2-8.1) and 2.8-fold (P=0.0018; 95% CI, 1.5-5.2) increases in the hazard of age- and pVo2-adjusted all-cause 3-year hospitalization were observed between the highest- and lowest-risk clusters. Using these data, we developed the first risk-stratification calculator for patients with exercise intolerance. When applying the risk calculator to patients in 2 independent invasive cardiopulmonary exercise testing cohorts (Boston and Graz, Austria), we observed a clinical risk profile that paralleled the derivation cohort. CONCLUSIONS: Network analyses were used to identify novel exercise groups and develop a point-of-care risk calculator. These data expand the range of useful clinical variables beyond pVo2 that predict hospitalization in patients with exercise intolerance.
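The network construction (edges at |r|>0.5) and the K-means step could be sketched with networkx and scikit-learn; the data file, column names, and the "pVO2" label are assumptions, not the study's code:

```python
# Sketch of the correlation-network and clustering pipeline: edges between
# exercise variables with |r| > 0.5, then k-means on the pVO2 subnetwork.
import pandas as pd
import networkx as nx
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

icpet = pd.read_csv("icpet_variables.csv")  # hypothetical: 39 exercise columns

corr = icpet.corr()
G = nx.Graph()
G.add_nodes_from(corr.columns)
for i, a in enumerate(corr.columns):
    for b in corr.columns[i + 1:]:
        if abs(corr.loc[a, b]) > 0.5:
            G.add_edge(a, b, r=corr.loc[a, b])

# Variables directly linked to peak VO2 form the subnetwork used for clustering.
subnet = ["pVO2"] + list(G.neighbors("pVO2"))
X = StandardScaler().fit_transform(icpet[subnet])
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(pd.Series(clusters).value_counts())
```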


Subjects
Cardiovascular Diseases/epidemiology, Exercise Tolerance, Aged, Exercise Test/statistics & numerical data, Female, Hospitalization/statistics & numerical data, Humans, Male, Middle Aged
20.
J Neural Eng ; 9(5): 056003, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22871558

ABSTRACT

In previous work (Georgopoulos et al 2007 J. Neural Eng. 4 349-55) we reported on the use of magnetoencephalographic (MEG) synchronous neural interactions (SNI) as a functional biomarker in Alzheimer's dementia (AD) diagnosis. Here we report on the application of canonical correlation analysis to investigate the relations between SNI and cognitive neuropsychological (NP) domains in AD patients. First, we performed individual correlations between each SNI and each NP, which provided an initial link between SNI and specific cognitive tests. Next, we performed factor analysis on each set, followed by a canonical correlation analysis between the derived SNI and NP factors. This last analysis optimally associated the entire MEG signal with cognitive function. The results revealed that SNI as a whole were mostly associated with memory and language, and, slightly less, executive function, processing speed and visuospatial abilities, thus differentiating functions subserved by the frontoparietal and the temporal cortices. These findings provide a direct interpretation of the information carried by the SNI and set the basis for identifying specific neural disease phenotypes according to cognitive deficits.
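The pipeline (factor analysis per variable set, then canonical correlation between the derived factors) maps onto standard tooling; the data files and factor counts below are assumptions:

```python
# Sketch of the analysis pipeline: factor analysis on each variable set, then
# canonical correlation between the derived MEG (SNI) and cognitive (NP) factors.
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.cross_decomposition import CCA

sni = pd.read_csv("sni_measures.csv")       # hypothetical MEG synchrony measures
np_scores = pd.read_csv("np_scores.csv")    # hypothetical cognitive test scores

sni_factors = FactorAnalysis(n_components=5, random_state=0).fit_transform(sni)
np_factors = FactorAnalysis(n_components=5, random_state=0).fit_transform(np_scores)

cca = CCA(n_components=2)
sni_c, np_c = cca.fit_transform(sni_factors, np_factors)
print(pd.DataFrame(sni_c).corrwith(pd.DataFrame(np_c)))  # canonical correlations
```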


Subjects
Alzheimer Disease/physiopathology, Cognition Disorders/physiopathology, Electroencephalography Phase Synchronization/physiology, Neurons/physiology, Aged, Alzheimer Disease/diagnosis, Alzheimer Disease/epidemiology, Cognition Disorders/diagnosis, Cognition Disorders/epidemiology, Humans, Magnetoencephalography, Male, Neuropsychological Tests